FoleyGen: Visually-Guided Audio Generation
Recent advancements in audio generation have been spurred by the evolution of
large-scale deep learning models and expansive datasets. However, the task of
video-to-audio (V2A) generation continues to be a challenge, principally
because of the intricate relationship between the high-dimensional visual and
auditory data, and the challenges associated with temporal synchronization. In
this study, we introduce FoleyGen, an open-domain V2A generation system built
on a language modeling paradigm. FoleyGen leverages an off-the-shelf neural
audio codec for bidirectional conversion between waveforms and discrete tokens.
The generation of audio tokens is facilitated by a single Transformer model,
which is conditioned on visual features extracted from a visual encoder. A
prevalent problem in V2A generation is the misalignment of generated audio with
the visible actions in the video. To address this, we explore three novel
visual attention mechanisms. We further undertake an exhaustive evaluation of
multiple visual encoders, each pretrained on either single-modal or multi-modal
tasks. Experimental results on the VGGSound dataset show that our proposed
FoleyGen outperforms previous systems across all objective metrics and human
evaluations.
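The abstract does not spell out the three visual attention mechanisms, but the conditioning setup, a Transformer over a [visual-frames | audio-tokens] sequence, can be illustrated with a toy attention mask. The frame-aligned causal variant and the `tokens_per_frame` alignment below are illustrative assumptions, not FoleyGen's actual design:

```python
import numpy as np

def v2a_attention_mask(n_vis, n_aud, causal_visual=False, tokens_per_frame=2):
    """Boolean mask (True = may attend) over the sequence [visual | audio].

    Audio tokens always attend causally to earlier audio tokens. With
    causal_visual=True, audio token t only sees visual frames aligned with
    time steps up to t (assuming a fixed tokens_per_frame alignment);
    otherwise it sees every frame.
    """
    n = n_vis + n_aud
    mask = np.zeros((n, n), dtype=bool)
    mask[:n_vis, :n_vis] = True          # visual prefix attends bidirectionally
    for t in range(n_aud):
        row = n_vis + t
        mask[row, n_vis:row + 1] = True  # causal self-attention over audio
        frames = min(n_vis, t // tokens_per_frame + 1) if causal_visual else n_vis
        mask[row, :frames] = True        # visibility of visual frames
    return mask
```

The causal-visual variant is one plausible way to tie generated audio to the video timeline; restricting each audio token to already-elapsed frames directly targets the audio-visual misalignment problem the abstract describes.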
Stack-and-Delay: a new codebook pattern for music generation
In language modeling based music generation, a generated waveform is
represented by a sequence of hierarchical token stacks that can be decoded
either in an auto-regressive manner or in parallel, depending on the codebook
patterns. In particular, flattening the codebooks represents the highest
quality decoding strategy, while being notoriously slow. To this end, we
propose a novel stack-and-delay decoding strategy that improves upon
flat-pattern decoding while generating four times faster than vanilla flat
decoding. This brings the inference time close to that of the
delay decoding strategy, and allows for faster inference on GPU for small batch
sizes. For the same inference efficiency budget as the delay pattern, we show
that the proposed approach performs better in objective evaluations, almost
closing the gap with the flat pattern in terms of quality. The results are
corroborated by subjective evaluations which show that samples generated by the
new model are slightly more often preferred to samples generated by the
competing model given the same text prompts.
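The codebook patterns discussed above can be made concrete with a toy schedule. With T frames and K codebooks per frame, the flat pattern spends one autoregressive step per (frame, codebook) pair, i.e. T*K steps, while the delay pattern shifts codebook k by k frames so that the K tokens emitted at each step can be predicted in parallel, taking T+K-1 steps. The stack-and-delay schedule itself is not reproduced here; this sketch only illustrates the two baselines it sits between:

```python
def flat_steps(n_frames, n_codebooks):
    # Flattened pattern: one autoregressive step per (frame, codebook) pair.
    return n_frames * n_codebooks

def delay_schedule(n_frames, n_codebooks):
    """Delay pattern: codebook k of frame t is emitted at step t + k. The
    tokens produced at a given step belong to different frames, so they can
    be predicted in parallel, and the sequence takes T + K - 1 steps."""
    steps = [[] for _ in range(n_frames + n_codebooks - 1)]
    for t in range(n_frames):
        for k in range(n_codebooks):
            steps[t + k].append((t, k))
    return steps
```

For typical values (e.g. T=1500 frames, K=4 codebooks for 30 s of audio), the gap between T*K and T+K-1 steps is what makes flat decoding notoriously slow and parallel patterns attractive.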
Enhance audio generation controllability through representation similarity regularization
This paper presents an innovative approach to enhance control over audio
generation by emphasizing the alignment between audio and text representations
during model training. In the context of language model-based audio generation,
the model leverages input from both textual and audio token representations to
predict subsequent audio tokens. However, the current configuration lacks
explicit regularization to ensure the alignment between the chosen text
representation and the language model's predictions. Our proposal involves the
incorporation of audio and text representation regularization, particularly
during the classifier-free guidance (CFG) phase, where the text condition is
excluded from cross-attention during language model training. The aim of this
proposed representation regularization is to minimize discrepancies in audio
and text similarity compared to other samples within the same training batch.
Experimental results on both music and audio generation tasks demonstrate that
our proposed methods lead to improvements in objective metrics for both audio
and music generation, as well as improvements in human perceptual evaluations
of audio generation.
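The batch-similarity idea can be sketched with a CLIP-style symmetric contrastive loss, in which each audio embedding should be closer to its own text embedding than to the other samples in the batch. This is a stand-in assumption for illustration; the paper's exact regularizer may differ:

```python
import numpy as np

def log_softmax(x, axis):
    x = x - x.max(axis=axis, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=axis, keepdims=True))

def contrastive_reg(audio, text, tau=0.07):
    """Symmetric InfoNCE over a batch of (audio, text) embedding pairs:
    audio i should match text i more closely than any other sample in the
    batch, and vice versa. audio, text: arrays of shape (B, D)."""
    a = audio / np.linalg.norm(audio, axis=1, keepdims=True)
    t = text / np.linalg.norm(text, axis=1, keepdims=True)
    logits = a @ t.T / tau                 # (B, B) cosine-similarity matrix
    idx = np.arange(len(a))
    loss_a2t = -log_softmax(logits, axis=1)[idx, idx].mean()
    loss_t2a = -log_softmax(logits, axis=0)[idx, idx].mean()
    return 0.5 * (loss_a2t + loss_t2a)
```

Added as an auxiliary term to the token-prediction loss, a regularizer of this shape penalizes batches where an audio representation is more similar to another sample's text than to its own, which is the alignment property the abstract targets.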
Exploring Speech Enhancement for Low-resource Speech Synthesis
High-quality, intelligible speech is essential for text-to-speech (TTS)
model training; however, obtaining high-quality data for low-resource languages
is challenging and expensive. Applying speech enhancement to an Automatic
Speech Recognition (ASR) corpus mitigates the issue by augmenting the training
data, but how the nonlinear distortion introduced by speech enhancement models
affects TTS training remains to be investigated. In this paper, we train a
TF-GridNet speech enhancement model and apply it to low-resource datasets that
were collected for the ASR task, and then train a discrete-unit TTS model on
the enhanced speech. We use Arabic datasets as an example and show that the
proposed pipeline significantly improves the low-resource TTS system compared
with other baseline methods in terms of the ASR WER metric. We also present an
empirical analysis of the correlation between speech enhancement and TTS
performance.
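Since the pipeline is judged by ASR word error rate, a minimal WER implementation makes the metric concrete: the word-level Levenshtein distance (substitutions + insertions + deletions) divided by the reference length.

```python
def wer(ref: str, hyp: str) -> float:
    """Word error rate between a reference and a hypothesis transcript."""
    r, h = ref.split(), hyp.split()
    # Standard edit-distance DP table over words.
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i                      # all deletions
    for j in range(len(h) + 1):
        d[0][j] = j                      # all insertions
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(r)][len(h)] / len(r)
```

In the setup above, a lower WER from an ASR system transcribing the synthesized speech is taken as evidence that enhancement-then-TTS produced more intelligible output than the baselines.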
A Context-Driven Modelling Framework for Dynamic Authentication Decisions
Nowadays, many mechanisms exist to perform authentication, such as text passwords and biometrics. However, reasoning about their relevance (e.g., their appropriateness for security and usability) in a given contextual situation is challenging for authentication system designers. In this paper, we present a Context-driven Modelling Framework for dynamic Authentication decisions (COFRA), where context information specifies the relevance of authentication mechanisms. COFRA is based on a precise metamodel that reveals the framework's abstractions and a set of constraints that specify their meaning. It therefore provides a language for determining the relevant authentication mechanisms (characterized by properties that ensure their appropriateness) in a given context. The framework supports adaptive-authentication system designers in the complex trade-off analysis between context information, risks, and authentication mechanisms, according to usability, deployability, security, and privacy. We validate the proposed framework through case studies and extensive exchanges with authentication and modelling experts. We show that model instances describing real-world use cases and authentication approaches proposed in the literature can be instantiated validly according to our metamodel. This validation highlights the necessity, sufficiency, and soundness of our framework.
Guidelines for the use and interpretation of assays for monitoring autophagy (3rd edition)
In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. For example, a key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process versus those that measure flux through the autophagy pathway (i.e., the complete process including the amount and rate of cargo sequestered and degraded). In particular, a block in macroautophagy that results in autophagosome accumulation must be differentiated from stimuli that increase autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. It is worth emphasizing here that lysosomal digestion is a stage of autophagy and evaluating its competence is a crucial part of the evaluation of autophagic flux, or complete autophagy.
Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. Along these lines, because of the potential for pleiotropic effects due to blocking autophagy through genetic manipulation it is imperative to delete or knock down more than one autophagy-related gene. In addition, some individual Atg proteins, or groups of proteins, are involved in other cellular pathways so not all Atg proteins can be used as a specific marker for an autophagic process. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular autophagy assays, we hope to encourage technical innovation in the field
Towards a Better Understanding of Impersonation Risks
In many situations, it is of interest for authentication systems to adapt to context (e.g., when the user's behavior differs from previous behavior). Hence, during authentication events, it is common to use contextually available features to calculate an impersonation risk score. This paper proposes an explainability model that can be used for authentication decisions and, in particular, to explain the impersonation risks that arise during suspicious authentication events (e.g., at unusual times or locations). The model applies Shapley values to understand the context behind the risks. Through a case study on 30,000 real-world authentication events, we show that risky and non-risky authentication events can be grouped according to similar contextual features, which can explain the risk of impersonation differently and specifically for each authentication event. Hence, explainability models can effectively improve our understanding of impersonation risks. The risky authentication events can be classified according to attack types. The contextual explanations of impersonation risk can help authentication policymakers and regulators who attempt to provide the right authentication mechanisms, to understand the suspiciousness of an authentication event and the attack type, and hence to choose a suitable authentication mechanism.
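The Shapley attribution step can be sketched exactly for a small feature set: each feature's value is its average marginal contribution to the risk score over all subsets of the other features. The feature names and the toy `risk` function below are hypothetical, not the paper's actual scoring model:

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value_fn):
    """Exact Shapley attribution: phi[f] is f's average marginal
    contribution to value_fn over all feature subsets, weighted by the
    number of orderings each subset represents."""
    n = len(features)
    phi = {f: 0.0 for f in features}
    for f in features:
        others = [g for g in features if g != f]
        for r in range(n):
            for subset in combinations(others, r):
                weight = factorial(r) * factorial(n - r - 1) / factorial(n)
                with_f = value_fn(set(subset) | {f})
                without_f = value_fn(set(subset))
                phi[f] += weight * (with_f - without_f)
    return phi
```

Exact enumeration is exponential in the number of features, so production explainers approximate it by sampling; for the handful of contextual features in an authentication event, the exact computation above is feasible and makes each feature's share of the risk score auditable.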